

Bias Correction of Learned Generative Models using Likelihood-Free Importance Weighting

Neural Information Processing Systems

A learned generative model often produces biased statistics relative to the underlying data distribution. A standard technique to correct this bias is importance sampling, where samples from the model are weighted by the likelihood ratio under model and true distributions. When the likelihood ratio is unknown, it can be estimated by training a probabilistic classifier to distinguish samples from the two distributions. We employ this likelihood-free importance weighting method to correct for the bias in generative models. We find that this technique consistently improves standard goodness-of-fit metrics for evaluating the sample quality of state-of-the-art deep generative models, suggesting reduced bias. Finally, we demonstrate its utility on representative applications in a) data augmentation for classification using generative adversarial networks, and b) model-based policy evaluation using off-policy data.
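The likelihood-free importance weighting idea above can be sketched on a toy problem: train a probabilistic classifier to distinguish real from model samples, convert its output into an estimated likelihood ratio, and use that ratio to reweight model samples. This is a minimal illustrative sketch, assuming equal-sized sample sets, 1-D Gaussian data, and a logistic-regression classifier; none of these specifics come from the paper itself.

```python
# Hedged sketch of likelihood-free importance weighting (LFIW) on a
# toy 1-D example. The data, classifier, and estimand are illustrative
# assumptions, not the paper's exact setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "True" data distribution p(x) and a biased "model" distribution q(x).
x_real = rng.normal(loc=0.0, scale=1.0, size=(5000, 1))
x_model = rng.normal(loc=0.5, scale=1.0, size=(5000, 1))

# Train a probabilistic classifier to distinguish real (label 1)
# from model (label 0) samples.
X = np.vstack([x_real, x_model])
y = np.concatenate([np.ones(len(x_real)), np.zeros(len(x_model))])
clf = LogisticRegression().fit(X, y)

# For equal-sized sample sets, p(x)/q(x) is approximated by
# c(x) / (1 - c(x)), where c(x) is the classifier's probability
# that x came from the real distribution.
c = clf.predict_proba(x_model)[:, 1]
w = c / (1.0 - c)

# Self-normalized importance-weighted estimate of E_p[x] computed
# from model samples only; compare with the uncorrected estimate.
naive = float(x_model.mean())
corrected = float((w[:, None] * x_model).sum() / w.sum())
print(naive, corrected)  # corrected should land closer to the true mean 0
```

Here the model distribution is shifted, so the naive Monte Carlo mean is biased toward 0.5; reweighting by the classifier-estimated likelihood ratio pulls the estimate back toward the true mean under p.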




Reviews: Bias Correction of Learned Generative Models using Likelihood-Free Importance Weighting

Neural Information Processing Systems

Congratulations, your paper has been accepted for publication at NeurIPS 2019. The reviewers found it to be a novel, well-executed piece of work. When preparing the camera-ready version, please bear in mind the reviewers' comments. In particular: please carefully define what "bias" means here; the footnote on p. 1 is somewhat vague.


Reviews: Bias Correction of Learned Generative Models using Likelihood-Free Importance Weighting

Neural Information Processing Systems

Summary: The paper proposes a method for correcting bias in the outputs of pretrained deep generative models. Given samples from a generator distribution and the real data distribution, the paper uses importance reweighting to up- or down-weight the generated samples. The importance weights are computed with a probabilistic binary classifier that predicts which distribution a sample came from. Experiments on several tasks show that importance reweighting improves task performance. Importance weighting via binary classification is a well-known technique.




Bias Correction of Learned Generative Models using Likelihood-Free Importance Weighting

Grover, Aditya, Song, Jiaming, Kapoor, Ashish, Tran, Kenneth, Agarwal, Alekh, Horvitz, Eric J., Ermon, Stefano

Neural Information Processing Systems
